Data Removal




Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization

Neural Information Processing Systems

The right to be forgotten calls for efficient machine unlearning techniques that make trained machine learning models forget a cohort of data. The combination of training and unlearning operations in traditional machine unlearning methods often incurs expensive computational costs on large-scale data. This paper presents a prompt certified machine unlearning algorithm, PCMU, which executes a one-time operation of simultaneous training and unlearning in advance for a series of machine unlearning requests, without knowledge of the removed/forgotten data. First, we establish a connection between randomized smoothing for certified robustness on classification and randomized smoothing for certified machine unlearning on gradient quantization. Second, we propose a prompt certified machine unlearning model based on randomized data smoothing and gradient quantization. We theoretically derive the certified radius R for the data change before and after data removals, and the certified budget of data removals with respect to R. Last but not least, we present another practical framework of randomized gradient smoothing and quantization, owing to the difficulty of producing high-confidence certificates in the first framework. We theoretically demonstrate the certified radius R' for the gradient change, the correlation between the two types of certified radii, and the certified budget of data removals with respect to R'.
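The core mechanism the abstract describes, smoothing gradients over randomized copies of the data and then quantizing them, can be sketched roughly as follows. This is a minimal illustration, not PCMU's actual algorithm: `grad_fn`, the Gaussian noise scale `sigma`, and the uniform quantizer are all assumptions made here for clarity.

```python
import numpy as np

def smoothed_quantized_gradient(grad_fn, x, y, sigma=0.1, n_samples=32, n_levels=16):
    """Monte Carlo estimate of a randomized-smoothed gradient, then quantized.

    grad_fn(x, y) -> flat gradient vector for one example; sigma is the
    Gaussian data-noise scale. Illustrative only, not the paper's API.
    """
    rng = np.random.default_rng(0)
    # Average gradients over noisy copies of the input (randomized data smoothing).
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape), y)
             for _ in range(n_samples)]
    g = np.mean(grads, axis=0)
    # Uniform quantization onto n_levels levels spanning the gradient's range.
    lo, hi = float(g.min()), float(g.max())
    if hi == lo:
        return g
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((g - lo) / step) * step
```

The intuition is that small removals perturb the averaged gradient only slightly, so the quantized gradient, and hence the trained model, is often unchanged, which is what the certified radius formalizes.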


Machine Unlearning of Traffic State Estimation and Prediction

Wang, Xin, Rockafellar, R. Tyrrell, Ban, Xuegang

arXiv.org Artificial Intelligence

Traffic State Estimation and Prediction (TSEP) has been extensively studied to reconstruct traffic state variables (e.g., flow, density, speed, travel time, etc.) using (partially) observed traffic data (Antoniou et al., 2013; Ban et al., 2011; Shi et al., 2021; Li et al., 2020). In recent years, advancements in data collection technologies have enabled TSEP methods to integrate traffic data from diverse sources for more accurate and robust estimation and prediction (Wang et al., 2016; Makridis and Kouvelas, 2023). These data sources can be broadly categorized into infrastructure-collected data and user-contributed data. Infrastructure-collected data typically includes measurements from loop detectors, traffic cameras, and radars installed on roadways or at intersections. In contrast, user-contributed data is derived from individuals, often through vehicles or personal devices, such as GPS traces, vehicle trajectories, and probe data collected via mobile apps or in-vehicle systems.


Factor Decorrelation Enhanced Data Removal from Deep Predictive Models

Yang, Wenhao, Li, Lin, Tao, Xiaohui, Shi, Kaize

arXiv.org Artificial Intelligence

The imperative of user privacy protection and regulatory compliance necessitates sensitive data removal in model training, yet this process often induces distributional shifts that undermine model performance, particularly in out-of-distribution (OOD) scenarios. We propose a novel data removal approach that enhances deep predictive models through factor decorrelation and loss perturbation. Our approach introduces: (1) a discriminative-preserving factor decorrelation module employing dynamic adaptive weight adjustment and iterative representation updating to reduce feature redundancy and minimize inter-feature correlations; (2) a smoothed data removal mechanism with loss perturbation that creates information-theoretic safeguards against data leakage during removal operations. Extensive experiments on five benchmark datasets show that our approach outperforms other baselines and consistently achieves high predictive accuracy and robustness even under significant distribution shifts. The results highlight its superior efficiency and adaptability in both in-distribution and out-of-distribution scenarios.
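The two ingredients the abstract names, a decorrelation objective and a perturbed loss, can be sketched in a simplified form. Both functions below are stand-ins invented for illustration (the paper's module uses dynamic weight adjustment and iterative updates, not this plain correlation penalty).

```python
import numpy as np

def decorrelation_penalty(features):
    """Squared Frobenius norm of the off-diagonal feature correlation matrix.

    features: array of shape (n_samples, n_features). Minimizing this term
    pushes inter-feature correlations toward zero. Illustrative only.
    """
    z = features - features.mean(axis=0)
    z = z / (z.std(axis=0) + 1e-8)              # standardize each feature
    corr = (z.T @ z) / len(z)                   # empirical correlation matrix
    off = corr - np.diag(np.diag(corr))         # zero out the diagonal
    return float((off ** 2).sum())

def perturbed_loss(loss, sigma=0.05, rng=None):
    """Add Gaussian noise to the training loss, a simple proxy for the
    paper's smoothed removal mechanism with loss perturbation."""
    rng = rng or np.random.default_rng()
    return loss + rng.normal(0.0, sigma)
```

In training, the penalty would be added to the task loss with a weighting coefficient, so that the learned factors stay decorrelated while the perturbation obscures the exact contribution of any individual example.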






FedUHB: Accelerating Federated Unlearning via Polyak Heavy Ball Method

Jiang, Yu, Tan, Chee Wei, Lam, Kwok-Yan

arXiv.org Artificial Intelligence

Federated learning facilitates collaborative machine learning, enabling multiple participants to collectively develop a shared model while preserving the privacy of individual data. The growing importance of the "right to be forgotten" calls for effective mechanisms to facilitate data removal upon request. In response, federated unlearning (FU) has been developed to efficiently eliminate the influence of specific data from the model. Current FU methods primarily rely on approximate unlearning strategies, which seek to balance data removal efficacy with computational and communication costs, but often fail to completely erase data influence. To address these limitations, we propose FedUHB, a novel exact unlearning approach that leverages the Polyak heavy ball optimization technique, a first-order method, to achieve rapid retraining. In addition, we introduce a dynamic stopping mechanism to optimize the termination of the unlearning process. Our extensive experiments show that FedUHB not only enhances unlearning efficiency but also preserves robust model performance after unlearning. Furthermore, the dynamic stopping mechanism effectively reduces the number of unlearning iterations, conserving both computational and communication resources. FedUHB thus proves to be an effective and efficient solution for exact data removal in federated learning settings.
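The Polyak heavy ball method the abstract relies on is a classical momentum update: each step accumulates past gradients into a velocity term, accelerating the retraining that exact unlearning requires. A minimal sketch, assuming `grad_fn` evaluates gradients on the retained data only (i.e., after the forgotten clients' data is removed); the function name and hyperparameters are illustrative, not FedUHB's actual interface.

```python
import numpy as np

def heavy_ball_retrain(grad_fn, w0, lr=0.1, beta=0.9, steps=300):
    """Polyak heavy-ball (momentum) retraining loop.

    grad_fn(w) -> gradient of the loss on the *retained* dataset at weights w.
    beta is the momentum coefficient; beta=0 recovers plain gradient descent.
    """
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)   # velocity accumulates past gradients
        w = w - lr * v              # heavy-ball update
    return w
```

Because it is a first-order method, each step costs one gradient evaluation, yet momentum typically needs far fewer steps than plain gradient descent to reach the retrained optimum, which is the source of the speedup the paper claims (its dynamic stopping rule would terminate this loop early once convergence is detected).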


Towards Aligned Data Removal via Twin Machine Unlearning

Sun, Yuyao, Niu, Zhenxing, Hua, Gang, Jin, Rong

arXiv.org Artificial Intelligence

Modern privacy regulations have spurred the evolution of machine unlearning, a technique that enables the removal of data from an already trained ML model without requiring retraining from scratch. Previous unlearning methods tend to induce the model to achieve the lowest possible classification accuracy on the removal data. Nonetheless, the authentic objective of machine unlearning is to align the unlearned model with the gold model, i.e., to achieve the same classification accuracy as the gold model. For this purpose, we present a Twin Machine Unlearning (TMU) approach, where a twin unlearning problem is defined corresponding to the original unlearning problem. As a result, the generalization-label predictor trained on the twin problem can be transferred to the original problem, facilitating aligned data removal. Comprehensive empirical experiments illustrate that our approach significantly enhances the alignment between the unlearned model and the gold model. Meanwhile, our method allows data removal without compromising model accuracy.